computing architecture
Service Discovery-Based Hybrid Network Middleware for Efficient Communication in Distributed Robotic Systems
Robotic middleware is fundamental to reliable communication among system components and is crucial for intelligent robotics, autonomous vehicles, and smart manufacturing. However, existing robotic middleware often struggles to meet diverse communication demands, optimize data transmission efficiency, and maintain scheduling determinism between Orin computing units in large-scale L4 autonomous vehicle deployments. This paper presents RIMAOS2C, a service discovery-based hybrid network communication middleware designed to tackle these challenges. By leveraging multi-level service discovery multicast, RIMAOS2C supports a wide variety of communication modes, including multiple cross-chip Ethernet protocols and PCIe communication. Its core mechanism, the Message Bridge, optimizes data-flow forwarding and employs shared memory for centralized message distribution, reducing message redundancy and minimizing transmission-delay uncertainty. Tested on L4 vehicles and Jetson Orin domain controllers, RIMAOS2C leverages TCP-based ZeroMQ to overcome the large-message transmission bottleneck of native CyberRT. In scenarios with two cross-chip subscribers, it eliminates message redundancy, improves large-data transmission efficiency by 36 to 40 percent, and reduces callback latency variation by 42 to 906 percent. This research advances the communication capabilities of robotic operating systems and proposes a novel approach to optimizing communication in distributed computing architectures for autonomous driving.
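The Message Bridge idea — transmit one copy of a message across the chip-to-chip link and fan it out locally, instead of one copy per subscriber — can be sketched in a few lines. This toy model is illustrative, not from the paper: `Link`, `naive_fanout`, and `bridged_fanout` are hypothetical names, and a plain Python list stands in for the shared-memory distribution on the remote chip.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """Counts payloads crossing a (simulated) cross-chip link."""
    sends: int = 0
    bytes_sent: int = 0

    def transmit(self, payload: bytes) -> bytes:
        self.sends += 1
        self.bytes_sent += len(payload)
        return payload

def naive_fanout(link, payload, subscribers):
    """One cross-chip copy per subscriber: redundant link traffic."""
    for sub in subscribers:
        sub.append(link.transmit(payload))

def bridged_fanout(link, payload, subscribers):
    """Message-bridge style: one cross-chip copy, then local fan-out
    (standing in for shared-memory distribution on the remote chip)."""
    copy = link.transmit(payload)
    for sub in subscribers:
        sub.append(copy)
```

With two cross-chip subscribers, the bridged path halves link traffic — the source of the redundancy elimination the abstract reports.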
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Texas > Harris County > Houston (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (6 more...)
- Information Technology (0.71)
- Transportation > Ground > Road (0.70)
Leveraging Cloud-Fog Automation for Autonomous Collision Detection and Classification in Intelligent Unmanned Surface Vehicles
Tran, Thien, Nguyen, Quang, Kua, Jonathan, Tran, Minh, Luu, Toan, Hoang, Thuong, Jin, Jiong
Industrial Cyber-Physical Systems (ICPS) technologies are foundational to maritime autonomy, particularly for Unmanned Surface Vehicles (USVs). However, onboard computational constraints and communication latency significantly restrict real-time data processing, analysis, and predictive modeling, limiting the scalability and responsiveness of maritime ICPS. To overcome these challenges, we propose a distributed Cloud-Edge-IoT architecture tailored for maritime ICPS, leveraging design principles from the recently proposed Cloud-Fog Automation paradigm. The architecture comprises three hierarchical layers: a Cloud Layer for centralized and decentralized data aggregation, advanced analytics, and future model refinement; an Edge Layer that executes localized AI-driven processing and decision-making; and an IoT Layer responsible for low-latency sensor data acquisition. Our experimental results demonstrate improvements in computational efficiency, responsiveness, and scalability. Compared with conventional approaches, we achieved a classification accuracy of 86% with improved latency performance. By adopting Cloud-Fog Automation, we address the low-latency processing constraints and scalability challenges of maritime ICPS applications. Our work offers a practical, modular, and scalable framework to advance robust, AI-driven decision-making and autonomy for intelligent USVs in future maritime ICPS.
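The three-layer placement logic can be sketched as a simple routing rule: a task's latency budget and compute demand decide whether it runs at the IoT, Edge, or Cloud layer. The thresholds below are illustrative assumptions, not values from the paper.

```python
def route(latency_budget_ms: float, compute_cost: float) -> str:
    """Place a task in the Cloud-Edge-IoT hierarchy.
    Thresholds are hypothetical, chosen only to illustrate the tiering."""
    if latency_budget_ms < 10:
        return "iot"    # low-latency sensor-side acquisition and handling
    if latency_budget_ms < 100 and compute_cost < 50:
        return "edge"   # localized AI-driven processing and decision-making
    return "cloud"      # aggregation, heavy analytics, model refinement
```

Tight deadlines pin work near the sensors; anything with slack or heavy compute is pushed upward, which is the essence of the Cloud-Fog Automation tiering.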
- Asia > Singapore (0.05)
- Asia > Vietnam (0.05)
- Oceania > Australia > Tasmania (0.04)
- Europe > United Kingdom > England > West Midlands > Birmingham (0.04)
SAMEdge: An Edge-cloud Video Analytics Architecture for the Segment Anything Model
Lu, Rui, Shi, Siping, Liu, Yanting, Wang, Dan
As artificial intelligence continues to evolve, a single large model can increasingly handle a wide range of video analytics tasks. One key foundation technology is the Segment Anything Model (SAM), which allows the video analytics task to be determined on the fly according to input prompts from the user. However, real-time response is crucial for user experience in video analytics applications, and it is hard to achieve with the limited communication and computation resources on the edge, especially with SAM, where users may continuously interact by adding or adjusting prompts. In this paper, we propose SAMEdge, a novel edge-cloud computing architecture designed to support SAM computations for edge users. SAMEdge integrates new modules on the edge and in the cloud to maximize analytics accuracy for visual-prompt and image-prompt inputs under latency constraints. It addresses the resource challenges of prompt encoding and image encoding by offering a visual prompt transformation algorithm for visual prompts and efficient workload partitioning for image encoding. SAMEdge is implemented by extending the open-source SAM project from Meta AI. We demonstrate its practical application through a case study on a Visual Tour Guide application. Our evaluation indicates that SAMEdge significantly enhances the accuracy of the video analytics application under distinct network bandwidths across various prompts.
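Edge-cloud workload partitioning of the kind SAMEdge applies to image encoding can be illustrated with a minimal split search: run the first k encoder layers on the edge, ship the intermediate activation, and finish in the cloud, choosing k to minimize end-to-end latency. This sketch and its cost model are our simplification, not SAMEdge's actual algorithm.

```python
def best_split(edge_ms, cloud_ms, xfer_ms):
    """Pick the layer cut minimizing edge compute + transfer + cloud compute.
    edge_ms[i]/cloud_ms[i]: per-layer times; xfer_ms[k]: cost of shipping
    the activation at cut k (len(xfer_ms) == n + 1). Illustrative only."""
    n = len(edge_ms)
    best_k, best_cost = 0, float("inf")
    for k in range(n + 1):
        cost = sum(edge_ms[:k]) + xfer_ms[k] + sum(cloud_ms[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost
```

The extremes k = 0 and k = n correspond to pure cloud and pure edge execution; an interior optimum appears when some early layer produces a small, cheap-to-ship activation.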
- Asia > China > Hong Kong (0.05)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- (3 more...)
Parallel Computing Architectures for Robotic Applications: A Comprehensive Review
With the growing complexity and capability of contemporary robotic systems, the need for sophisticated computing solutions to efficiently handle tasks such as real-time processing, sensor integration, decision-making, and control algorithms is also increasing. Conventional serial computing frequently fails to meet these requirements, underscoring the need for high-performance alternatives. Parallel computing, the simultaneous use of several processing elements to solve computational problems, offers a promising answer. Parallel computing architectures such as multi-core CPUs, GPUs, FPGAs, and distributed systems provide substantial gains in processing capacity and efficiency. By utilizing these architectures, robotic systems can attain improved performance in functionalities such as real-time image processing, sensor fusion, and path planning. This review underscores the transformative potential of parallel computing architectures in advancing robotic technology, discusses real-life case studies of these architectures in the robotics field, and presents comparisons among them. It also explores the challenges pertaining to these architectures and suggests possible solutions for further research and for the enhancement of robotic applications.
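The sensor-fusion use case mentioned above maps naturally onto data parallelism: each sensor stream is preprocessed independently, then the results are combined. A minimal sketch using Python's standard thread pool (the `preprocess` and `fuse` functions are hypothetical stand-ins, not from the review):

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(reading: float) -> float:
    # Stand-in for per-sensor filtering / feature extraction,
    # which real systems would run on separate cores or a GPU.
    return reading * 2.0

def fuse(readings):
    """Preprocess all sensor readings in parallel, then fuse by averaging."""
    with ThreadPoolExecutor() as pool:
        processed = list(pool.map(preprocess, readings))
    return sum(processed) / len(processed)
```

The same fan-out/fan-in shape carries over to GPUs (one kernel per stream) and distributed systems (one worker per sensor node); only the execution substrate changes.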
- Asia > India (0.04)
- North America > United States > Pennsylvania (0.04)
- North America > United States > Colorado (0.04)
- (5 more...)
- Research Report (0.82)
- Overview (0.64)
- Health & Medicine (1.00)
- Energy (1.00)
- Information Technology > Robotics & Automation (0.93)
- (2 more...)
Miniaturized, energy-efficient computer chip is faster than silicon
Artificial intelligence presents a major challenge to conventional computing architecture. In standard models, memory storage and computing take place in different parts of the machine, and data must move from its area of storage to a CPU or GPU for processing. The problem with this design is that movement takes time. You can have the most powerful processing unit on the market, but its performance will be limited as it idles waiting for data, a problem known as the "memory wall" or "bottleneck." When computing outperforms memory transfer, latency is unavoidable.
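The memory wall described above can be made concrete with a roofline-style back-of-envelope: runtime is bounded by whichever is slower, arithmetic or data movement, and the gap between the two is processor idle time. The numbers in the test below are illustrative assumptions, not measurements from the article.

```python
def runtime_s(flops: float, bytes_moved: float,
              peak_flops: float, bandwidth_bps: float):
    """Roofline-style estimate: the chip runs at whichever is slower,
    compute or memory transfer. Returns (runtime, memory_bound?)."""
    t_compute = flops / peak_flops        # time if compute were the limit
    t_memory = bytes_moved / bandwidth_bps  # time to move the data
    return max(t_compute, t_memory), t_memory > t_compute
```

For a workload doing 1 GFLOP over 8 GB of data on a 1 TFLOP/s part with 100 GB/s of bandwidth, the processor needs only 1 ms of arithmetic but waits 80 ms for data — exactly the idling the "memory wall" names.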
- Information Technology (0.71)
- Government > Regional Government > North America Government > United States Government (0.48)
- Energy (0.48)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.30)
- Information Technology > Hardware (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (0.73)
- Information Technology > Artificial Intelligence > Robots (0.48)
Making AI-Powered Devices Smart Using Neuromorphic Computing
AI is powering smart products that are transforming industries, with increasing demand from both customers and businesses. According to an Accenture report, one smart-home product segment is projected to be worth US$135 billion by 2035. Soon, everything around us will be "smart," with devices that can be controlled by voice and gesture. From home entertainment to interiors and home improvement, devices will have increased autonomy, freeing us of mundane activities. Robot vacuums have already started to take their place in homes, but imagine them replacing human cleaners in department stores.
Power and Performance Efficient SDN-Enabled Fog Architecture
Akhunzada, Adnan, Zeadally, Sherali, Islam, Saif ul
Software Defined Networks (SDNs) have dramatically simplified network management. However, enabling pure SDNs to respond in real time while handling massive amounts of data remains a challenging task. In contrast, fog computing has strong potential to serve large surges of data in real time. The SDN control plane enables innovation and greatly simplifies network operations and management, providing a promising foundation for energy- and performance-aware SDN-enabled fog computing. Moreover, power efficiency and performance evaluation in SDN-enabled fog computing is an area that has not yet been fully explored by the research community. We present a novel SDN-enabled fog architecture that improves power efficiency and performance by leveraging cooperative and non-cooperative policy-based computing. Preliminary results from extensive simulations demonstrate improvements in power utilization as well as overall performance (i.e., processing time and response time). Finally, we discuss several open research issues that need further investigation.
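The cooperative versus non-cooperative policy distinction can be sketched as a toy scheduler: under a cooperative policy the SDN controller balances tasks across all fog nodes, while under a non-cooperative policy each task stays on its home node. This model and its names (`assign`, `home`) are our illustration, not the paper's algorithm.

```python
def assign(tasks, nodes: int, cooperative: bool):
    """tasks: list of (home_node, cost) pairs. Returns per-node load.
    Cooperative: controller sends each task to the least-loaded fog node.
    Non-cooperative: each task runs on its home node regardless of load."""
    loads = [0.0] * nodes
    for home, cost in tasks:
        i = loads.index(min(loads)) if cooperative else home
        loads[i] += cost
    return loads
```

A lower peak load under the cooperative policy translates into shorter processing and response times, the metrics the abstract evaluates.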
- Europe (1.00)
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
How "green" is your Artificial Intelligence?
Artificial intelligence (AI) systems face a set of conflicting goals: being accurate (consuming large amounts of computational and electrical power) and being accessible (lower in cost, less computationally intensive, and less power-hungry). Unfortunately, many of today's AI implementations are environmentally unsustainable. Improvements in AI energy efficiency will be driven by several factors, including more efficient algorithms, more efficient computing architectures, and more efficient components. Measuring and tracking the energy consumption of AI systems is necessary to identify any improvements in energy efficiency. One sign of the growing awareness of energy consumption in AI systems is that the ULPMark (ultra-low power) benchmark line from EEMBC is now adding ML inference and developing a new benchmark, the ULPMark-ML.
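The measurement the article calls for reduces to a simple bookkeeping identity: energy is average power times runtime, and an efficiency metric such as inferences per joule lets two systems be compared directly. The helper names below are hypothetical; ULPMark defines its own specific measurement procedure.

```python
def joules(avg_power_w: float, runtime_s: float) -> float:
    """Energy consumed: average power (watts) times runtime (seconds)."""
    return avg_power_w * runtime_s

def inferences_per_joule(n_inferences: int,
                         avg_power_w: float, runtime_s: float) -> float:
    """Efficiency metric: useful work delivered per unit of energy."""
    return n_inferences / joules(avg_power_w, runtime_s)
```

Tracking this metric across algorithm, architecture, and component changes is what makes the efficiency improvements the article describes visible at all.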
Quantum AI is still years from enterprise prime time
Quantum computing's potential to revolutionize AI depends on the growth of a developer ecosystem in which suitable tools, skills, and platforms are in abundance. These milestones are all still at least a few years in the future. What follows is an analysis of the quantum AI industry's maturity at the present time. Quantum AI executes ML (machine learning), DL (deep learning), and other data-driven AI algorithms reasonably well, but as an approach it has not yet moved well beyond the proof-of-concept stage.
Energy-saving designs for data-intensive computer processing
Researchers have demonstrated methods for both designing innovative data-centric computing hardware and co-designing hardware with machine-learning algorithms that together could improve energy efficiency by as much as two orders of magnitude. Advances in machine learning have ushered in a new era of computing -- the data-centric era -- and are forcing engineers to rethink aspects of computing architecture that have gone mostly unchallenged for 75 years. "The problem is that for large-scale deep neural networks, which are state-of-the-art for machine learning today, more than 90% of the electricity needed to run the entire system is consumed in moving data between the memory and processor," said Yingyan Lin, an assistant professor of electrical and computer engineering. Lin and collaborators proposed two complementary methods for optimizing data-centric processing, both of which were presented at the International Symposium on Computer Architecture (ISCA), a conference for new ideas and research in computer architecture. The drive for data-centric architecture is related to a problem called the von Neumann bottleneck, an inefficiency that stems from the separation of memory and processing in the computing architecture that has reigned supreme since mathematician John von Neumann developed it in 1945.
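Lin's 90% figure explains why data-centric designs target movement rather than arithmetic: in an Amdahl-style estimate, only the data-movement share of the energy budget shrinks, so that share caps the achievable gain. The function and numbers below are our illustration, not results from the ISCA papers.

```python
def energy_after(total_j: float, movement_fraction: float,
                 movement_reduction: float) -> float:
    """Amdahl-style energy estimate: only the data-movement share of the
    budget is reduced; compute energy is untouched. All inputs illustrative."""
    moved = total_j * movement_fraction          # energy spent moving data
    rest = total_j - moved                       # energy spent computing
    return rest + moved * (1.0 - movement_reduction)
```

With 90% of 100 J spent on movement, even a 99% reduction in movement energy leaves 10.9 J — roughly a 9x gain, which is why order-of-magnitude improvements require co-designing the hardware and the algorithms so that far less data crosses the memory-processor boundary in the first place.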
- North America > United States > Texas (0.05)
- North America > United States > California > Santa Barbara County > Santa Barbara (0.05)